    Visible and hidden observables in super-linearization

    We call a system super-linearizable if it admits a finite-dimensional embedding as a linear system -- known as a finite-dimensional Koopman embedding; in other words, if its dynamics can be linearized by adding a finite set of observables. We introduce the notions of visible and hidden observables for such embeddings, which, roughly speaking, are the observables that explicitly appear in the original system and the ones that do not appear but are nevertheless necessary for the embedding. Distinct embeddings can have different numbers of hidden and visible observables. In this paper, we derive a tight lower bound on the number of visible observables of a system among all its super-linearizations.
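    The paper's own examples are not reproduced above, but a standard textbook illustration of a finite-dimensional Koopman embedding (assumed here, not taken from this paper) is the system x1' = mu*x1, x2' = lam*(x2 - x1^2): adding the observable y3 = x1^2 closes the lifted dynamics into a three-dimensional linear system. The sketch below checks this numerically; the dynamics, the added observable, and the parameter values are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): a finite-dimensional Koopman
# embedding of the classic system  x1' = mu*x1,  x2' = lam*(x2 - x1**2).
# Adding the observable y3 = x1**2 makes the lifted dynamics linear.
import numpy as np
from scipy.integrate import solve_ivp

mu, lam = -0.5, -1.0  # assumed parameter values

def nonlinear_rhs(t, x):
    x1, x2 = x
    return [mu * x1, lam * (x2 - x1**2)]

# Lifted state y = (x1, x2, x1**2) evolves linearly: y' = A y,
# since (x1**2)' = 2*x1*x1' = 2*mu*(x1**2).
A = np.array([[mu,  0.0,  0.0],
              [0.0, lam, -lam],
              [0.0, 0.0, 2 * mu]])

x0 = np.array([1.0, 0.5])
y0 = np.array([x0[0], x0[1], x0[0]**2])

t_eval = np.linspace(0.0, 5.0, 50)
sol_nl = solve_ivp(nonlinear_rhs, (0.0, 5.0), x0, t_eval=t_eval)
sol_lin = solve_ivp(lambda t, y: A @ y, (0.0, 5.0), y0, t_eval=t_eval)

# The first two components of the lifted linear solution should match the
# nonlinear trajectory up to integration error.
print(np.max(np.abs(sol_nl.y - sol_lin.y[:2])))
```

    In this toy example y3 is a "hidden" observable in the sense sketched above: it does not appear as a state of the original system, yet it is needed to close the linear embedding.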

    On sparse representations of linear operators and the approximation of matrix products

    Thus far, sparse representations have been exploited largely in the context of robustly estimating functions in a noisy environment from a few measurements. In this context, the existence of a basis in which the signal class under consideration is sparse is used to decrease the number of necessary measurements while controlling the approximation error. In this paper, we instead focus on applications in numerical analysis, by way of sparse representations of linear operators, with the objective of minimizing the number of operations needed to perform basic operations (here, multiplication) on these operators. We represent a linear operator by a sum of rank-one operators, and show how a sparse representation that guarantees a low approximation error for the product can be obtained by analyzing an induced quadratic form. This construction in turn yields new algorithms for computing approximate matrix products.
    Comment: 6 pages, 3 figures; presented at the 42nd Annual Conference on Information Sciences and Systems (CISS 2008)
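    The paper's selection rule based on the induced quadratic form is not reproduced here. As a minimal sketch of the general idea, the snippet below writes A as a sum of rank-one terms via its SVD and keeps only the k largest terms when forming the product A @ B; the magnitude-based truncation and the test matrices are assumptions for illustration only.

```python
# Illustrative sketch (assumed, not the paper's algorithm): approximate
# A @ B by keeping only the top-k rank-one terms of A's SVD.
import numpy as np

def approx_product(A, B, k):
    """Approximate A @ B using the k largest rank-one terms of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Each kept term costs one row-times-matrix product Vt[i] @ B plus an
    # outer-product accumulation, instead of a full dense multiplication.
    return sum(s[i] * np.outer(U[:, i], Vt[i] @ B) for i in range(k))

rng = np.random.default_rng(0)
# A low-numerical-rank A keeps the truncation error small (assumed setup).
A = rng.standard_normal((200, 30)) @ rng.standard_normal((30, 200))
B = rng.standard_normal((200, 200))

exact = A @ B
approx = approx_product(A, B, k=30)
print(np.linalg.norm(exact - approx) / np.linalg.norm(exact))
```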